Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative) period.
Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
- Free, publicly accessible full text available November 8, 2025
- Free, publicly accessible full text available December 18, 2025
- Goel, S (Ed.) Machine Learning (ML) models are widely used in a variety of applications, including Intelligent Transportation Systems (ITS). Because these systems operate in highly dynamic environments, they are exposed to numerous security threats that cause Data Quality (DQ) variations; among such threats are network attacks that may cause data losses. We evaluate the influence of these factors on image DQ and, consequently, on image ML model performance. We propose and investigate Federated Learning (FL) as a way to enhance the overall level of privacy and security in ITS, as well as to improve ML model robustness to possible DQ variations in real-world applications. Our empirical study, conducted with traffic sign images and YOLO, VGG16, and ResNet models, demonstrated the greater robustness of an FL-based architecture over a centralized one.
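The FL setup described above rests on server-side aggregation of locally trained models. As a point of reference, a FedAvg-style weighted average can be sketched as follows; this is a minimal NumPy illustration under the standard assumption that clients are weighted by their local sample counts, not the paper's actual implementation:

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_updates: list of 1-D parameter vectors, one per client.
    client_sizes:   number of local training samples per client,
                    used as aggregation weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_updates)          # shape: (clients, params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three clients with different amounts of local data.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_params = federated_average(updates, sizes)  # -> [3.5, 4.5]
```

Because only parameter vectors leave the clients, the raw traffic sign images never need to be centralized, which is the privacy property the abstract highlights.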
- We present a novel approach for anomaly detection in a decentralized federated learning setting for edge units. We propose quantifiable metrics of Reputation and Trust that allow us to detect training anomalies on local edge units during the learning rounds. Our approach can be combined with any aggregation method used on the server and does not impact the performance of the aggregation algorithm. Moreover, it enables an audit of the training process of the participating edge units across training rounds based on our proposed metrics. We verify our approach in two distinct use cases: financial applications, with the objective of detecting anomalous transactions, and an Intelligent Transportation System that classifies input images. Our results confirm that our approach is capable of detecting training anomalies and even improving the effectiveness of the learning process when the anomalous edge units are excluded from training.
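The abstract does not define its Reputation and Trust metrics, but the general pattern (score each client per round, then smooth scores across rounds) can be illustrated. In this sketch the per-round reputation flags updates far from the coordinate-wise median, and trust is an exponential moving average of reputation; the median-distance rule, the threshold, and the smoothing factor are all assumptions, not the authors' metrics:

```python
import numpy as np

def round_reputation(client_updates, z_threshold=2.0):
    """Per-round reputation: 1.0 if a client's update is within
    z_threshold robust deviations of the round's median update, else 0.0."""
    stacked = np.stack(client_updates)
    median = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median, axis=1)
    scale = np.median(dists) + 1e-12            # robust scale estimate
    return (dists / scale <= z_threshold).astype(float)

def update_trust(trust, reputation, alpha=0.5):
    """Smooth per-round reputation into a longer-term trust score."""
    return alpha * trust + (1 - alpha) * reputation

# Two well-behaved clients and one anomalous one.
rep = round_reputation([np.array([0.0, 0.0]),
                        np.array([0.1, 0.0]),
                        np.array([10.0, 10.0])])   # -> [1.0, 1.0, 0.0]
trust = update_trust(np.ones(3), rep)              # -> [1.0, 1.0, 0.5]
```

Because the scoring runs before aggregation and only reads the submitted updates, it can sit in front of any server-side aggregation rule, matching the claim that the approach is aggregation-agnostic.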
- Goel, S (Ed.) Federated Learning (FL), an emerging decentralized Machine Learning (ML) approach, offers a promising avenue for training models on distributed data while safeguarding individual privacy. Nevertheless, when implemented in real ML applications, adversarial attacks that aim to deteriorate the quality of the local training data and to compromise the performance of the resulting model remain a challenge. In this paper, we propose and develop an approach that integrates Reputation and Trust techniques into conventional FL. These techniques introduce a novel pre-processing step on the local models, performed before the aggregation procedure, in which we cluster the local model updates in their parameter space and use the clustering results to evaluate trust toward each local client. The trust value is updated in each aggregation round and takes into account retrospective evaluations performed in previous rounds, so that the history of updates makes the assessment more informative and reliable. Through an empirical study on a traffic sign classification computer vision application, we verify that our approach can identify local clients that are compromised by adversarial attacks and that submit updates detrimental to FL performance. The local updates provided by non-trusted clients are excluded from aggregation, which enhances FL security and robustness for models that might otherwise be trained on corrupted data.
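The clustering-based pre-processing step can be illustrated with a small two-cluster split of the flattened updates. This is a minimal k-means (k=2) sketch under the common assumption that honest clients form the larger, tighter cluster in parameter space; the seeding rule and the majority-is-trusted decision are illustrative choices, not the paper's algorithm:

```python
import numpy as np

def trusted_client_mask(client_updates, n_iter=20):
    """Split flattened local updates into two clusters; mark the
    majority cluster as trusted and the minority as untrusted."""
    X = np.stack(client_updates).astype(float)
    # Seed the two centroids with the pair of updates farthest apart.
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    i, j = np.unravel_index(d.argmax(), d.shape)
    centers = X[[i, j]].copy()
    for _ in range(n_iter):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2),
                           axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    majority = np.bincount(labels, minlength=2).argmax()
    return labels == majority

# Three benign clients near the origin and one drifted (attacked) client.
updates = [np.array([0.0, 0.0]), np.array([0.2, 0.1]),
           np.array([0.1, -0.1]), np.array([8.0, 9.0])]
mask = trusted_client_mask(updates)   # -> [True, True, True, False]
```

Only updates with `mask == True` would then be passed to the aggregation step, mirroring the abstract's exclusion of non-trusted clients from aggregation.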
 An official website of the United States government
Full Text Available